AAAI 2021 - AI for Social Impact

Total: 40

#1 Fairness in Influence Maximization through Randomization

Authors: Ruben Becker ; Gianlorenzo D'Angelo ; Sajjad Ghobadi ; Hugo Gilbert

The influence maximization paradigm has been used by researchers in various fields to study how information spreads in social networks. While the attention was previously mostly on efficiency, fairness issues have more recently been taken into account in this scope. In the present paper, we propose to use randomization as a means for achieving fairness. While this general idea is not new, it has not been applied in the area of information spread in networks. Similar to previous works like Fish et al. (WWW '19) and Tsang et al. (IJCAI '19), we study the maximin criterion for (group) fairness. By allowing randomized solutions, we introduce two different variants of this problem. While the original deterministic maximin problem has been shown to be inapproximable, interestingly, we show that both probabilistic variants permit approximation algorithms with a constant multiplicative factor of 1-1/e plus an arbitrarily small additive error that is due to the simulation of the information spread. For an experimental study, we provide implementations of our methods and compare the achieved fairness values to existing methods. Unsurprisingly, the ex-ante values, i.e., the minimum expected value for an individual (or group) to obtain the information, of the computed probabilistic strategies are significantly larger than the (ex-post) fairness values of previous methods. This confirms that studying fairness via randomization is a worthwhile direction. More surprisingly, we observe that even the ex-post fairness values, i.e., the fairness values of sets sampled according to the probabilistic strategies, computed by our routines dominate the fairness achieved by previous methods on most of the instances tested.
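
As a minimal sketch of the ex-ante vs. ex-post distinction, the toy snippet below evaluates a randomized strategy over candidate seed sets; the coverage matrix and strategy probabilities are hypothetical stand-ins for quantities that would be estimated by simulating the information spread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: 4 candidate seed sets, 3 groups.
# coverage[s, g] = estimated probability that group g is reached by seed set s.
coverage = np.array([[0.6, 0.2, 0.5],
                     [0.3, 0.7, 0.4],
                     [0.5, 0.5, 0.3],
                     [0.2, 0.4, 0.8]])
p = np.array([0.4, 0.3, 0.2, 0.1])  # randomized strategy over seed sets

# Ex-ante maximin fairness: minimum over groups of the *expected* coverage.
ex_ante = (p @ coverage).min()

# Ex-post fairness of one realized draw: minimum group coverage of the sampled set.
s = rng.choice(len(p), p=p)
ex_post = coverage[s].min()

print(f"ex-ante fairness = {ex_ante:.3f}, ex-post (set {s}) = {ex_post:.3f}")
```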

#2 Intelligent Recommendations for Citizen Science

Authors: Daniel Ben Zaken ; Kobi Gal ; Guy Shani ; Avi Segal ; Darlene Cavalier

Citizen science refers to scientific research that is carried out by volunteers, often in collaboration with professional scientists. The spread of the internet has allowed volunteers to contribute to citizen science projects in dramatically new ways while creating scientific value and gaining pedagogical and social benefits. Given the sheer number of available projects, finding the right project, one that best suits the user's preferences and capabilities, has become a major challenge and is essential for keeping volunteers motivated and active contributors. We address this challenge by developing a system for personalized project recommendations which was fully deployed in the wild. We adapted several recommendation algorithms from the literature, based on memory-based and model-based collaborative filtering approaches, to the citizen science domain. The algorithms were trained on historical data of users' interactions in the SciStarter platform, a leading citizen science site, as well as their contributions to different projects. The trained algorithms were evaluated in SciStarter with hundreds of users who were provided with personalized recommendations for new projects they had not contributed to before. The results show that the new recommendation system led to increased participation in new SciStarter projects when compared to groups that were recommended projects using non-personalized approaches, and compared to behavior before recommendations. In particular, the group of volunteers receiving recommendations created by an SVD (matrix factorization) algorithm exhibited the highest levels of contributions to new projects, when compared to the other cohorts. A follow-up survey conducted with the SciStarter community confirmed that users felt that the recommendations matched their personal interests and goals. Based on these results, our recommendation system is now fully integrated into the SciStarter portal, positively affecting hundreds of users each week and leading to social and educational benefits.
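
As a rough illustration of the matrix-factorization (SVD) approach, not the deployed SciStarter system, the sketch below factorizes a hypothetical user-project interaction matrix and scores projects the user has not contributed to.

```python
import numpy as np

# Hypothetical user-project interaction matrix (rows: users, cols: projects).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4],
              [0, 1, 5, 4]], dtype=float)

# Rank-2 truncated SVD as a simple matrix-factorization model.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # reconstructed preference scores

# Recommend the highest-scoring project the user has not contributed to yet.
user = 0
unseen = np.where(R[user] == 0)[0]
best = unseen[np.argmax(R_hat[user, unseen])]
print(f"recommend project {best} to user {user} (score {R_hat[user, best]:.2f})")
```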

#3 Learning Augmented Methods for Matching: Improving Invasive Species Management and Urban Mobility

Authors: Johan Bjorck ; Qinru Shi ; Carrie Brown-Lima ; Jennifer Dean ; Angela Fuller ; Carla Gomes

With the success of machine learning, integrating learned models into real-world systems has become a critical challenge. Naively applying predictions to combinatorial optimization problems can incur high costs, which has motivated researchers to consider learning augmented algorithms that can make use of faulty or incomplete predictions. Inspired by two matching problems in computational sustainability where data is abundant, we consider the learning augmented min weight matching problem where some nodes are revealed online while others are known a priori, e.g., by being predicted by machine learning. We develop an algorithm that is able to make use of this extra information and provably improves upon pessimistic online algorithms. We evaluate our algorithm on two settings from computational sustainability -- the coordination of unreliable citizen scientists for invasive species management, and the matching between taxis and riders under uncertain trip duration predictions. In both cases, we perform extensive experiments on real-world datasets and find that our method outperforms baselines, showing how learning augmented algorithms can reliably improve solutions for problems in computational sustainability.
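
A minimal sketch of the taxi-rider setting, under the simplifying assumption that all rider locations are predicted up front: it plans a min-weight matching against the (possibly faulty) predictions and then measures the realized cost once true locations are revealed. The actual algorithm interleaves a priori and online nodes; this toy only illustrates planning with predictions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)

# Offline side: taxis with known locations. Online side: predicted rider locations.
taxis = rng.uniform(0, 10, size=(5, 2))
predicted_riders = rng.uniform(0, 10, size=(5, 2))

# Plan a min-weight matching against the predictions ...
cost = cdist(taxis, predicted_riders)
rows, cols = linear_sum_assignment(cost)

# ... then evaluate it on the true rider locations once they are revealed.
true_riders = predicted_riders + rng.normal(0, 0.5, size=predicted_riders.shape)
realized = cdist(taxis, true_riders)[rows, cols].sum()
print(f"planned cost {cost[rows, cols].sum():.2f}, realized cost {realized:.2f}")
```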

#4 Accelerating Ecological Sciences from Above: Spatial Contrastive Learning for Remote Sensing

Authors: Johan Bjorck ; Brendan H. Rappazzo ; Qinru Shi ; Carrie Brown-Lima ; Jennifer Dean ; Angela Fuller ; Carla Gomes

The rise of neural networks has opened the door for automatic analysis of remote sensing data. A challenge to using this machinery for computational sustainability is the necessity of massive labeled data sets, which can be cost-prohibitive for many non-profit organizations. The primary motivation for this work is one such problem: the efficient management of invasive species -- invading flora and fauna that are estimated to cause damages in the billions of dollars annually. As an ongoing collaboration with the New York Natural Heritage Program, we consider the use of unsupervised deep learning techniques for dimensionality reduction of remote sensing images, which can reduce sample complexity for downstream tasks and decrease the need for large labeled data sets. We consider spatially augmenting contrastive learning by training neural networks to correctly classify two nearby patches of a landscape as such. We demonstrate that this approach improves upon previous methods and naive classification for a large-scale data set of remote sensing images derived from invasive species observations obtained over 30 years. Additionally, we simulate deployment in the field via active learning and evaluate this method on another important challenge in computational sustainability -- landcover classification -- and again find that it outperforms previous baselines.
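
A minimal sketch of a spatially augmented contrastive objective, assuming an encoder has already produced embeddings for pairs of nearby patches; the InfoNCE-style loss below treats each nearby pair as positives and all other patches in the batch as negatives. This is an illustration of the idea, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def spatial_info_nce(z_a, z_b, temperature=0.1):
    """Contrastive loss treating z_a[i] and z_b[i] (embeddings of two
    spatially nearby patches) as a positive pair and all other patches
    in the batch as negatives."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # cosine similarities
    targets = torch.arange(z_a.size(0))           # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 patch pairs, 128-d embeddings from some encoder.
loss = spatial_info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```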

#5 Real-time Tropical Cyclone Intensity Estimation by Handling Temporally Heterogeneous Satellite Data

Authors: Boyo Chen ; Buo-Fu Chen ; Yun-Nung Chen

Analyzing big geophysical observational data collected by multiple advanced sensors on various satellite platforms advances our understanding of the geophysical system. For instance, convolutional neural networks (CNN) have achieved great success in estimating tropical cyclone (TC) intensity based on satellite data with fixed temporal frequency (e.g., ~3 h). However, to achieve more timely (under 30 min) and accurate TC intensity estimates, a deep learning model is needed that can handle temporally heterogeneous satellite observations. Specifically, infrared (IR1) and water vapor (WV) images are available every 15 minutes, while passive microwave rain rate (PMW) is available only about every 3 hours. Meanwhile, the visible (VIS) channel is severely affected by noise and sunlight intensity, making it difficult to utilize. Therefore, we propose a novel framework that combines a generative adversarial network (GAN) with a CNN. The model utilizes all data during the training phase, including VIS and PMW information, and eventually uses only the high-frequency IR1 and WV data for providing intensity estimates during the prediction phase. Experimental results demonstrate that the hybrid GAN-CNN framework achieves precision comparable to the state-of-the-art models, while increasing the maximum estimation frequency from every 3 hours to less than every 15 minutes. Please visit https://github.com/BoyoChen/CNN-GAN-TC for code and implementation details.
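
As a heavily simplified sketch of the inference path, with placeholder convolutional modules standing in for the paper's GAN generator and CNN regressor: a generator hallucinates the missing low-frequency PMW channel from the always-available IR1/WV channels, and a regressor estimates intensity from the combined stack.

```python
import torch
import torch.nn as nn

# Placeholder modules, not the paper's architecture: a generator that
# hallucinates the PMW channel from IR1/WV, and a tiny CNN intensity regressor.
generator = nn.Conv2d(2, 1, kernel_size=3, padding=1)   # IR1+WV -> pseudo-PMW
regressor = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

ir1_wv = torch.randn(4, 2, 64, 64)       # high-frequency channels (every 15 min)
pseudo_pmw = generator(ir1_wv)           # stands in for real PMW at test time
intensity = regressor(torch.cat([ir1_wv, pseudo_pmw], dim=1))
print(intensity.shape)                   # -> torch.Size([4, 1])
```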

#6 Detection and Prediction of Nutrient Deficiency Stress using Longitudinal Aerial Imagery

Authors: Saba Dadsetan ; Gisele Rose ; Naira Hovakimyan ; Jennifer Hobbs

Early, precise detection of nutrient deficiency stress (NDS) has key economic as well as environmental impact; precision application of chemicals in place of blanket application reduces operational costs for growers while reducing the amount of chemicals which may enter the environment unnecessarily. Furthermore, earlier treatment reduces yield loss and therefore boosts crop production during a given season. With this in mind, we collect sequences of high-resolution aerial imagery and construct semantic segmentation models to detect and predict NDS across the field; our work sits at the intersection of agriculture, remote sensing, and deep learning. First, we establish a baseline for full-field detection of NDS and quantify the impact of pretraining, backbone architecture, input representation, and sampling strategy. We then quantify the amount of information available at different points in the season by building a single-timestamp model based on a U-Net. Next, we construct our proposed spatiotemporal architecture, which combines a U-Net with a convolutional LSTM, to accurately detect regions of the field showing NDS; this approach achieves an IoU score of 0.53. Finally, we show that this architecture can be trained to predict regions of the field which are expected to show NDS in a later flight, potentially more than three weeks in the future, maintaining an IoU score of 0.47-0.51 depending on how far in advance the prediction is made. We will also release a dataset which we believe will benefit the computer vision, remote sensing, and agriculture fields. This work contributes to the recent developments in deep learning for remote sensing and agriculture while addressing a key social challenge with implications for economics and sustainability.
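
For reference, the IoU (intersection-over-union) score reported above can be computed as follows; the masks here are toy stand-ins for predicted and annotated NDS regions.

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-union between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy masks standing in for predicted vs. annotated NDS regions of a field.
pred = np.zeros((64, 64)); pred[10:40, 10:40] = 1
target = np.zeros((64, 64)); target[15:45, 15:45] = 1
print(f"IoU = {iou(pred, target):.2f}")
```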

#7 Graph Learning for Inverse Landscape Genetics

Authors: Prathamesh Dharangutte ; Christopher Musco

The problem of inferring unknown graph edges from numerical data at a graph's nodes appears in many forms across machine learning. We study a version of this problem that arises in the field of landscape genetics, where genetic similarity between organisms living in a heterogeneous landscape is explained by a weighted graph that encodes the ease of dispersal through that landscape. Our main contribution is an efficient algorithm for inverse landscape genetics, the task of inferring this graph from measurements of genetic similarity at different locations (graph nodes). Inverse landscape genetics is important in discovering impediments to species dispersal that threaten biodiversity and long-term species survival. In particular, it is widely used to study the effects of climate change and human development. Drawing on influential work that models organism dispersal using graph effective resistances (McRae 2006), we reduce the inverse landscape genetics problem to that of inferring graph edges from noisy measurements of these resistances, which can be obtained from genetic similarity data. Building on the NeurIPS 2018 work of Hoskins et al. on learning edges in social networks, we develop an efficient first-order optimization method for solving this problem. Despite its non-convex nature, experiments on synthetic and real genetic data establish that our method provides fast and reliable convergence, significantly outperforming existing heuristics used in the field. By providing researchers with a powerful, general-purpose algorithmic tool, we hope our work will help accelerate research in landscape genetics.
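
As a small sketch of the effective-resistance model at the heart of the reduction, the snippet below computes pairwise effective resistances from the Laplacian pseudoinverse of a hypothetical landscape graph; the inverse problem then amounts to adjusting the edge weights by first-order optimization until these resistances match the genetic-similarity measurements.

```python
import numpy as np

# Hypothetical weighted landscape graph on 4 nodes (symmetric edge weights).
W = np.array([[0, 2, 0, 1],
              [2, 0, 3, 0],
              [0, 3, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
L_pinv = np.linalg.pinv(L)              # Moore-Penrose pseudoinverse

def effective_resistance(u, v):
    """R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v)."""
    e = np.zeros(len(W)); e[u], e[v] = 1, -1
    return e @ L_pinv @ e

print(f"R_eff(0, 2) = {effective_resistance(0, 2):.3f}")
```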

#8 Harnessing Social Media to Identify Homeless Youth At-Risk of Substance Use

Authors: Zi-Yi Dou ; Anamika Barman-Adhikari ; Fei Fang ; Amulya Yadav

Homeless youth are a highly vulnerable population and report elevated rates of substance use. Prior work on mitigating substance use among homeless youth has primarily relied on survey data, which can then be used to inform the design of targeted intervention programs. However, such survey data is often onerous to collect, is limited by its reliance on self-reports and retrospective recall, and quickly becomes dated. The advent of social media has provided us with an important data source for understanding the health behaviors of homeless youth. In this paper, we target this specific population and demonstrate how to detect substance use based on texts from social media. We collect ~135K Facebook posts and comments together with survey responses from a group of homeless youth and use this data to build novel substance use detection systems with machine learning and natural language processing techniques. Experimental results show that our proposed methods achieve ROC-AUC scores of ~0.77 on identifying certain kinds of substance use among homeless youth using Facebook conversations only, and ROC-AUC scores of ~0.83 when combined with answers to four survey questions that are not about their demographic characteristics or substance use. Furthermore, we investigate connections between the characteristics of people's Facebook posts and substance use and provide insights about the problem.
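
A toy sketch of a text-only detection pipeline under simple assumptions (TF-IDF features with a linear classifier rather than the paper's models), showing how the ROC-AUC metric is computed; the corpus and labels are synthetic placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic placeholder corpus; the real pipeline uses ~135K Facebook posts
# and comments paired with survey-derived substance-use labels.
texts = ["i went hiking today", "party was wild last night",
         "studying at the library", "totally wasted again"] * 25
labels = [0, 1, 0, 1] * 25

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, random_state=0)
vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(X_tr), y_tr)
scores = clf.predict_proba(vec.transform(X_te))[:, 1]
print(f"ROC-AUC = {roc_auc_score(y_te, scores):.2f}")
```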

#9 Using Radio Archives for Low-Resource Speech Recognition: Towards an Intelligent Virtual Assistant for Illiterate Users

Authors: Moussa Doumbouya ; Lisa Einstein ; Chris Piech

For many of the 700 million illiterate people around the world, speech recognition technology could provide a bridge to valuable information and services. Yet, those most in need of this technology are often the most underserved by it. In many countries, illiterate people tend to speak only low-resource languages, for which the datasets necessary for speech technology development are scarce. In this paper, we investigate the effectiveness of unsupervised speech representation learning on noisy radio broadcasting archives, which are abundant even in low-resource languages. We make three core contributions. First, we release two datasets to the research community. The first, West African Radio Corpus, contains 142 hours of audio in more than 10 languages with a labeled validation subset. The second, West African Virtual Assistant Speech Recognition Corpus, consists of 10K labeled audio clips in four languages. Next, we share West African wav2vec, a speech encoder trained on the noisy radio corpus, and compare it with the baseline Facebook speech encoder trained on six times more data of higher quality. We show that West African wav2vec performs similarly to the baseline on a multilingual speech recognition task, and significantly outperforms the baseline on a West African language identification task. Finally, we share the first-ever speech recognition models for Maninka, Pular and Susu, languages spoken by a combined 10 million people in over seven countries, including six where the majority of the adult population is illiterate. Our contributions offer a path forward for ethical AI research to serve the needs of those most disadvantaged by the digital divide.

#10 Retrieve and Revise: Improving Peptide Identification with Similar Mass Spectra

Author: Zhengcong Fei

Tandem mass spectrometry is an indispensable technology for the identification of proteins from complex mixtures. Accurate and sensitive analysis of large amounts of mass spectra data is a principal challenge in proteomics. Conventional deep learning-based peptide identification models usually adopt an encoder-decoder framework and generate the target sequence from left to right without fully exploiting the global information. A few recent approaches seek to employ two-pass decoding, yet they have limitations when facing spectra filled with noise. In this paper, we propose a new paradigm for improved peptide identification, which first retrieves a similar mass spectrum from the database as a reference and then revises the matched sequence according to the difference information between the referenced spectrum and the current context. The design is inspired by the observation that the retrieved peptide-spectrum pair provides a good starting point and indirect access to both past and future information, such that each revised amino acid can be produced with better noise perception and global understanding. Moreover, a disturb-based optimization process is introduced to sharpen the attention over the difference vector with reinforcement learning before it is fed to the decoder. Experimental results on several public datasets demonstrate that a prominent performance boost is obtained with the proposed method. Remarkably, we achieve new state-of-the-art identification results on these datasets.
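
A minimal sketch of the retrieval step, assuming spectra are represented as fixed-length binned intensity vectors; cosine similarity stands in for whatever spectrum-similarity measure the full system uses, and the retrieved peptide then serves as the draft sequence to be revised.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binned spectra: a reference database and one query spectrum.
database = rng.random((1000, 2000))      # 1000 spectra, 2000 m/z bins
query = rng.random(2000)

# Retrieve the most similar reference spectrum by cosine similarity;
# its associated peptide sequence becomes the draft for the revise step.
sims = database @ query / (np.linalg.norm(database, axis=1) * np.linalg.norm(query))
best = int(np.argmax(sims))
print(f"retrieved spectrum {best} (cosine similarity {sims[best]:.3f})")
```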

#11 K-N-MOMDPs: Towards Interpretable Solutions for Adaptive Management

Authors: Jonathan Ferrer-Mestres ; Thomas G. Dietterich ; Olivier Buffet ; Iadine Chades

In biodiversity conservation, adaptive management (AM) is the principal tool for decision making under uncertainty. AM problems are planning problems that can be modelled using Mixed Observability MDPs (MOMDPs). MOMDPs tackle decision problems where state variables are completely or partially observable. Unfortunately, MOMDP solutions (policy graphs) are too complex to be interpreted by human decision-makers. Here, we provide algorithms to solve K-N-MOMDPs, where K represents the maximum number of fully observable states and N represents the maximum number of alpha-vectors. Our algorithms calculate compact and more interpretable policy graphs from existing MOMDP models and solutions. We apply these algorithms to two computational sustainability applications: optimal release of bio-control agents to prevent dengue epidemics and conservation of the Gouldian finch, a threatened bird species. The methods dramatically reduce the number of states and alpha-vectors in MOMDP problems without significantly reducing their quality. The resulting policies have small policy graphs (4-6 nodes) that can be easily interpreted by human decision-makers.
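
As a toy illustration of what a compact, interpretable policy graph looks like (the nodes, actions, and observations below are hypothetical, not from the paper's case studies): each node prescribes an action, and observation-labeled edges determine the next node.

```python
# A hypothetical 2-node conservation policy graph.
policy_graph = {
    "monitor": {"action": "survey_population",
                "next": {"stable": "monitor", "declining": "protect"}},
    "protect": {"action": "release_biocontrol",
                "next": {"recovered": "monitor", "declining": "protect"}},
}

# Walk the graph: take the node's action, observe, follow the matching edge.
node = "monitor"
for obs in ["stable", "declining", "recovered"]:
    print(f"{node}: do {policy_graph[node]['action']}, then observe '{obs}'")
    node = policy_graph[node]["next"][obs]
```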

#12 Predicting Flashover Occurrence using Surrogate Temperature Data

Authors: Eugene Yujun Fu ; Wai Cheong Tam ; Jun Wang ; Richard Peacock ; Paul A Reneke ; Grace Ngai ; Hong Va Leong ; Thomas Cleary

Fire fighter fatalities and injuries in the U.S. remain too high and fire fighting too hazardous. To date, fire fighters have relied only on their experience to avoid life-threatening fire events, such as flashover. In this paper, we describe the development of a flashover prediction model which can be used to warn fire fighters before flashover occurs. Specifically, we consider the use of a fire simulation program to generate a set of synthetic data and an attention-based bidirectional long short-term memory network to learn the complex relationships between temperature signals and flashover conditions. We first validate the fire simulation program with temperature measurements obtained from full-scale fire experiments. Then, we generate a set of synthetic temperature data which accounts for realistic fire and vent opening conditions in a multi-compartment structure. Results show that our proposed method achieves promising performance for the prediction of flashover even when temperature data is completely lost in the room of fire origin. It is believed that the flashover prediction model can facilitate the transformation of fire fighting tactics from traditional experience-based decision making to data-driven decision making and reduce fire fighter deaths and injuries.
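
A minimal sketch of an attention-based bidirectional LSTM of the kind described, assuming multivariate temperature sequences as input; the sensor count and layer sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class AttentiveBiLSTM(nn.Module):
    """Bidirectional LSTM with additive attention over time steps,
    mapping a multivariate temperature sequence to a flashover score."""
    def __init__(self, n_sensors, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                       # x: (batch, time, n_sensors)
        h, _ = self.lstm(x)                     # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        context = (w * h).sum(dim=1)            # weighted temporal summary
        return torch.sigmoid(self.head(context))

model = AttentiveBiLSTM(n_sensors=8)
print(model(torch.randn(4, 120, 8)).shape)      # -> torch.Size([4, 1])
```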

#13 Fair and Interpretable Algorithmic Hiring using Evolutionary Many Objective Optimization

Authors: Michael Geden ; Joshua Andrews

Hiring is a high-stakes decision-making process that balances the joint objectives of being fair and accurately selecting the top candidates. The industry-standard method employs subject-matter experts to manually generate hiring algorithms; however, this method is resource intensive and finds sub-optimal solutions. Despite the recognized need for algorithmic hiring solutions to address these limitations, no reported method currently supports optimizing predictive objectives while complying with legal fairness standards. We present the novel application of Evolutionary Many-Objective Optimization (EMOO) methods to create the first fair, interpretable, and legally compliant algorithmic hiring approach. Using a novel application of Dirichlet-based genetic operators for improved search, we compare state-of-the-art EMOO models (NSGA-III, SPEA2-SDE, bi-goal evolution) to expert solutions, validating our results on three real-world datasets spanning diverse organizational positions. Experimental results demonstrate that the proposed EMOO models outperform human experts, consistently generate fairer hiring algorithms, and can provide additional lift when removing constraints required for human analysis.
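
As a hedged illustration of a Dirichlet-based genetic operator (the paper's exact operators may differ), the sketch below recombines two predictor-weight vectors by sampling a child from a Dirichlet distribution centered between the parents, which keeps offspring on the probability simplex by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_crossover(parent_a, parent_b, concentration=50.0):
    """Recombine two predictor-weight vectors (points on the simplex) by
    sampling a child from a Dirichlet centered on their midpoint; a larger
    concentration keeps the child closer to the parents."""
    mid = 0.5 * (parent_a + parent_b)
    return rng.dirichlet(concentration * mid + 1e-6)

# Toy usage: weights over 5 assessment predictors in a hiring algorithm.
a = rng.dirichlet(np.ones(5))
b = rng.dirichlet(np.ones(5))
child = dirichlet_crossover(a, b)
print(child, child.sum())   # a valid weight vector summing to 1
```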

#14 Abusive Language Detection in Heterogeneous Contexts: Dataset Collection and the Role of Supervised Attention

Authors: Hongyu Gong ; Alberto Valido ; Katherine M. Ingram ; Giulia Fanti ; Suma Bhat ; Dorothy L. Espelage

Abusive language is a massive problem in online social platforms. Existing abusive language detection techniques are particularly ill-suited to comments containing heterogeneous abusive language patterns, i.e., both abusive and non-abusive parts. This is due in part to the lack of datasets that explicitly annotate heterogeneity in abusive language. We tackle this challenge by providing an annotated dataset of abusive language in over 11,000 comments from YouTube. We account for heterogeneity in this dataset by separately annotating both the comment as a whole and the individual sentences that comprise each comment. We then propose an algorithm that uses a supervised attention mechanism to detect and categorize abusive content using multi-task learning. We empirically demonstrate the challenges of using traditional techniques on heterogeneous content and the comparative gains in performance of the proposed approach over state-of-the-art methods.
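
A minimal sketch of how sentence-level annotations could supervise an attention head, assuming the model already produces an attention distribution over a comment's sentences; the KL-based penalty below is an illustrative choice, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def supervised_attention_loss(attn, sentence_labels, eps=1e-8):
    """Penalize divergence between the model's attention over sentences
    and a target distribution placing mass on the abusive sentences."""
    target = sentence_labels / (sentence_labels.sum(dim=1, keepdim=True) + eps)
    return F.kl_div((attn + eps).log(), target, reduction="batchmean")

# Toy usage: batch of 2 comments x 4 sentences; 1 marks an abusive sentence.
attn = torch.softmax(torch.randn(2, 4), dim=1)
labels = torch.tensor([[0., 1., 0., 1.], [1., 0., 0., 0.]])
loss = supervised_attention_loss(attn, labels)   # added to the task losses
print(loss.item())
```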

#15 Project RISE: Recognizing Industrial Smoke Emissions

Authors: Yen-Chia Hsu ; Ting-Hao (Kenneth) Huang ; Ting-Yao Hu ; Paul Dille ; Sean Prendi ; Ryan Hoffman ; Anastasia Tsuhlares ; Jessica Pachuta ; Randy Sargent ; Illah Nourbakhsh

Industrial smoke emissions pose a significant concern to human health. Prior works have shown that using Computer Vision (CV) techniques to identify smoke as visual evidence can influence the attitude of regulators and empower citizens to pursue environmental justice. However, existing datasets are not of sufficient quality nor quantity to train the robust CV models needed to support air quality advocacy. We introduce RISE, the first large-scale video dataset for Recognizing Industrial Smoke Emissions. We adopted a citizen science approach, collaborating with local community members to annotate whether a video clip shows smoke emissions. Our dataset contains 12,567 clips from 19 distinct views from cameras that monitored three industrial facilities. These daytime clips span 30 days over two years, including all four seasons. We ran experiments using deep neural networks to establish a strong performance baseline and reveal smoke recognition challenges. Our survey study gathered community feedback, and our data analysis revealed opportunities for integrating citizen scientists and crowd workers into the application of Artificial Intelligence for Social Impact.

#16 Computational Visual Ceramicology: Matching Image Outlines to Catalog Sketches

Authors: Barak Itkin ; Lior Wolf ; Nachum Dershowitz

Field archeologists are called upon to identify potsherds, for which they rely on their professional experience and on reference works. We have developed a recognition method starting from images captured on site, which relies on the shape of the sherd's fracture outline. The method sets up a new target for deep learning, integrating information from points along inner and outer surfaces to learn about shapes. Training the classifiers required tackling multiple challenges that arose on account of our working with real-world archeological data: paucity of labeled data; extreme imbalance between instances of different categories; and the need to avoid neglecting rare classes and to take note of minute distinguishing features of some classes. The scarcity of training data was overcome by using synthetically produced virtual potsherds and by employing multiple data-augmentation techniques. A novel form of training loss allowed us to overcome classification problems caused by under-populated classes and an inhomogeneous distribution of discriminative features.
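
The paper's training loss is novel, but the standard remedy it improves upon can be sketched as inverse-frequency class weighting in the cross-entropy; the class counts below are hypothetical stand-ins for a long-tailed potsherd typology.

```python
import torch
import torch.nn as nn

# Hypothetical class counts for a long-tailed potsherd typology.
counts = torch.tensor([500., 120., 30., 8., 3.])

# Inverse-frequency weights boost the gradient signal of rare classes.
weights = counts.sum() / (len(counts) * counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(16, 5)                  # classifier outputs
labels = torch.randint(0, 5, (16,))
print(criterion(logits, labels).item())
```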

#17 Prediction of Landfall Intensity, Location, and Time of a Tropical Cyclone

Authors: Sandeep Kumar ; Koushik Biswas ; Ashish Kumar Pandey

Predicting the intensity, location, and time of the landfall of a tropical cyclone well in advance and with high accuracy can immensely reduce human and material loss. In this article, we develop a Long Short-Term Memory (LSTM) based recurrent neural network model to predict the intensity (in terms of maximum sustained surface wind speed), location (latitude and longitude), and time (in hours after the observation period) of the landfall of a tropical cyclone that originates in the North Indian Ocean. The model takes as input the best track data of a cyclone, consisting of its location, pressure, sea surface temperature, and intensity, for a certain number of hours (from 12 to 36) at any time during the course of the cyclone, as a time series, and then provides predictions with high accuracy. For example, using 24 hours of data from any point during a cyclone's course, the model provides state-of-the-art results by predicting landfall intensity, time, latitude, and longitude with mean absolute errors of 4.24 knots, 4.5 hours, 0.24 degrees, and 0.37 degrees, respectively, which corresponds to a distance error of 51.7 kilometers from the landfall location. We further check the efficacy of the model on three recent devastating cyclones, Bulbul, Fani, and Gaja, and achieve better results than on the test dataset.
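
A minimal sketch of an LSTM regressor with the described interface (a window of track features in, four landfall targets out); the feature count and layer sizes are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LandfallLSTM(nn.Module):
    """LSTM regressor mapping a cyclone track window to four targets:
    landfall intensity, time, latitude, and longitude."""
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 4)

    def forward(self, x):            # x: (batch, hours, n_features)
        _, (h, _) = self.lstm(x)     # final hidden state summarizes the window
        return self.head(h[-1])

model = LandfallLSTM()
window = torch.randn(2, 24, 5)       # 24 h of track/pressure/SST features
print(model(window).shape)           # -> torch.Size([2, 4])
```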

#18 Court Opinion Generation from Case Fact Description with Legal Basis

Authors: Quanzhi Li ; Qiong Zhang

In this study, we propose an approach for automatically generating a court view from the fact description of a legal case. This is a text-to-text natural language generation problem, and it can support automatic legal document generation. Due to the specialized nature of the legal domain, our model exploits the charge and law article information in the generation process, instead of utilizing just the fact description text. A BERT model is used as the encoder and a Transformer architecture as the decoder. To smoothly integrate these two parts, we employ two separate optimizers for the two components during the training process. Experiments on two datasets of Chinese legal cases show that our approach outperforms other methods.
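
A small sketch of the two-optimizer setup, with placeholder linear modules standing in for the BERT encoder and Transformer decoder; the learning rates are illustrative assumptions.

```python
import torch
from torch import nn, optim

# Stand-in modules; the paper pairs a pretrained BERT encoder with a
# Transformer decoder trained from scratch.
encoder = nn.Linear(768, 768)   # placeholder for the BERT encoder
decoder = nn.Linear(768, 768)   # placeholder for the Transformer decoder

# Two separate optimizers let the pretrained encoder move with a small
# learning rate while the freshly initialized decoder trains faster.
enc_opt = optim.AdamW(encoder.parameters(), lr=2e-5)
dec_opt = optim.AdamW(decoder.parameters(), lr=1e-4)

loss = decoder(encoder(torch.randn(8, 768))).pow(2).mean()  # dummy loss
loss.backward()
enc_opt.step(); dec_opt.step()
enc_opt.zero_grad(); dec_opt.zero_grad()
```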

#19 Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images

Authors: Kang Liu ; Benjamin Tan ; Siddharth Garg

Unprecedented data collection and sharing have exacerbated privacy concerns and led to increasing interest in privacy-preserving tools that remove sensitive attributes from images while maintaining useful information for other tasks. Currently, state-of-the-art approaches use privacy-preserving generative adversarial networks (PP-GANs) for this purpose, for instance, to enable reliable facial expression recognition without leaking users' identity. However, PP-GANs do not offer formal proofs of privacy and instead rely on experimentally measuring information leakage using the classification accuracy of deep learning (DL)-based discriminators on the sensitive attributes. In this work, we question the rigor of such checks by subverting existing privacy-preserving GANs for facial expression recognition. We show that it is possible to hide the sensitive identification data in the sanitized output images of such PP-GANs for later extraction, which can even allow for reconstruction of the entire input images, while satisfying privacy checks. We demonstrate our approach via a PP-GAN-based architecture and provide qualitative and quantitative evaluations using two public datasets. Our experimental results raise fundamental questions about the need for more rigorous privacy checks of PP-GANs, and we provide insights into their broader social impact.
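
To make the hiding-and-extraction idea concrete, the sketch below uses plain least-significant-bit steganography, a generic technique chosen only for illustration; the paper's attack instead embeds the secret through the GAN's learned transformation.

```python
import numpy as np

def hide_bits(image, bits):
    """Embed a bit string in the least significant bits of an 8-bit image.
    Generic steganography for illustration; not the paper's mechanism."""
    flat = image.flatten().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def recover_bits(image, n):
    return image.flatten()[:n] & 1

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
secret = np.array([1, 0, 1, 1, 0, 1, 0, 1], dtype=np.uint8)
stego = hide_bits(img, secret)
assert (recover_bits(stego, 8) == secret).all()
print("recovered:", recover_bits(stego, 8))
```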

#20 Mitigating Political Bias in Language Models through Reinforced Calibration

Authors: Ruibo Liu ; Chenyan Jia ; Jason Wei ; Guangxuan Xu ; Lili Wang ; Soroush Vosoughi

Current large-scale language models can be politically biased as a result of the data they are trained on, potentially causing serious problems when they are deployed in real-world settings. In this paper, we describe metrics for measuring political bias in GPT-2 generation and propose a reinforcement learning (RL) framework for mitigating political biases in generated text. By using rewards from word embeddings or a classifier, our RL framework guides debiased generation without having access to the training data or requiring the model to be retrained. In empirical experiments on three attributes sensitive to political bias (gender, location, and topic), our methods reduced bias according to both our metrics and human evaluation, while maintaining readability and semantic coherence.
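A toy sketch of a classifier-based reward of the kind described: generations whose predicted political leaning is balanced receive higher reward. The two-way classifier here is a dummy stand-in, and the reward form is an illustrative assumption, not the paper's exact definition.

```python
import torch

def bias_reward(texts, classifier):
    """Reward = -|p(leaning A) - p(leaning B)|, so politically balanced
    generations score highest. `classifier` returns (batch, 2) probabilities."""
    probs = classifier(texts)
    return -(probs[:, 0] - probs[:, 1]).abs()

# Dummy stand-in classifier for illustration only.
dummy = lambda texts: torch.softmax(torch.randn(len(texts), 2), dim=1)
print(bias_reward(["generated text a", "generated text b"], dummy))
```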

#21 HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection

Authors: Binny Mathew ; Punyajoy Saha ; Seid Muhie Yimam ; Chris Biemann ; Pawan Goyal ; Animesh Mukherjee

Hate speech is a challenging issue plaguing online social media. While better models for hate speech detection are continuously being developed, there is little research on the bias and interpretability aspects of hate speech detection. In this paper, we introduce HateXplain, the first benchmark hate speech dataset covering multiple aspects of the issue. Each post in our dataset is annotated from three different perspectives: the basic, commonly used 3-class classification (i.e., hate, offensive, or normal), the target community (i.e., the community that has been the victim of hate speech/offensive speech in the post), and the rationales, i.e., the portions of the post on which the labelling decision (as hate, offensive, or normal) is based. We utilize existing state-of-the-art models and observe that even models that perform very well in classification do not score high on explainability metrics like model plausibility and faithfulness. We also observe that models which utilize the human rationales for training perform better in reducing unintended bias towards target communities. We have made our code and dataset public for other researchers.
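
As a toy illustration of ERASER-style faithfulness metrics of the kind referenced (comprehensiveness and sufficiency), using a dummy probability model keyed on a single token; real evaluations use the trained classifier's probabilities instead.

```python
# Dummy probability model for illustration only.
def predict(tokens):
    p_hate = 0.9 if "slur" in tokens else 0.1
    return {"hate": p_hate, "normal": 1 - p_hate}

def comprehensiveness(tokens, rationale, label):
    """Probability drop when rationale tokens are removed (higher = more faithful)."""
    kept = [t for t in tokens if t not in rationale]
    return predict(tokens)[label] - predict(kept)[label]

def sufficiency(tokens, rationale, label):
    """Probability drop when only rationale tokens are kept (lower = more faithful)."""
    return predict(tokens)[label] - predict(list(rationale))[label]

tokens = ["you", "are", "a", "slur"]
print(comprehensiveness(tokens, {"slur"}, "hate"))  # ~0.8
print(sufficiency(tokens, {"slur"}, "hate"))        # ~0.0
```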

#22 Goten: GPU-Outsourcing Trusted Execution of Neural Network Training

Authors: Lucien K. L. Ng ; Sherman S. M. Chow ; Anna P. Y. Woo ; Donald P. H. Wong ; Yongjun Zhao

Deep learning unlocks applications with societal impacts, e.g., detecting child exploitation imagery and genomic analysis of rare diseases. Deployment, however, needs compliance with stringent privacy regulations. Training algorithms that preserve the privacy of training data are in pressing need. Purely cryptographic approaches can protect privacy, but they are still costly, even when they rely on two or more non-colluding servers. Seemingly "trivial" operations in plaintext quickly become prohibitively inefficient when a series of them are "crypto-processed," e.g., (dynamic) quantization for ensuring the intermediate values would not overflow. Slalom, recently proposed by Tramer and Boneh, is the first solution that leverages both a GPU (for efficient batch computation) and a trusted execution environment (TEE) (for minimizing the use of cryptography). Roughly, it works via extensive pre-computation over known and fixed weights, and hence it only supports private inference, leaving five related problems for private training unaddressed. Goten, our privacy-preserving training and prediction framework, tackles all five problems simultaneously via our careful design over the "mismatched" cryptographic and GPU data types (due to the tension between precision and efficiency) and our round-optimal GPU-outsourcing protocol (hence minimizing the communication cost between servers). It 1) stochastically trains a low-bitwidth yet accurate model, 2) supports dynamic quantization (a challenge left by Slalom), 3) minimizes the memory-swapping overhead of the memory-limited TEE and its communication with the GPU, 4) crypto-protects the (dynamic) model weights from the untrusted GPU, and 5) outperforms a pure-TEE system, even without pre-computation (needed by Slalom). As a baseline, we build CaffeScone, which secures Caffe using a TEE but not a GPU; Goten shows a 6.84x speed-up on the whole VGG-11. Goten also outperforms Falcon, proposed by Wagh et al., the latest secure multi-server cryptographic solution, by 132.64x using VGG-11. Lastly, we demonstrate Goten's efficacy in training models for breast cancer diagnosis over sensitive images.
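
As a simplified sketch of the Slalom-style masked outsourcing that this line of work builds on (using floats here instead of the finite-field arithmetic real protocols require): the TEE masks its private activation before handing the heavy linear algebra to the untrusted GPU, then unmasks using a precomputed correction term.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.standard_normal((4, 3))   # layer weights, known to the GPU
x = rng.standard_normal(4)        # private activation, held inside the TEE
r = rng.standard_normal(4)        # one-time random mask
rW = r @ W                        # correction, precomputed inside the TEE

gpu_result = (x + r) @ W          # outsourced computation on masked input
tee_result = gpu_result - rW      # unmasking inside the TEE
assert np.allclose(tee_result, x @ W)
print("recovered x @ W without exposing x to the GPU")
```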

#23 A Universal 2-state n-action Adaptive Management Solver

Authors: Luz Valerie Pascal ; Marianne Akian ; Sam Nicol ; Iadine Chades

In data-poor and urgent decision-making applications, managers need to make decisions without complete knowledge of the system dynamics. In biodiversity conservation, adaptive management (AM) is the principal tool for decision-making under uncertainty. AM can be solved using simplified Mixed Observability Markov Decision Processes called hidden model MDPs (hmMDPs) when the unknown dynamics are assumed stationary. hmMDPs provide optimal policies to AM problems by augmenting the MDP state space with an unobservable state variable representing a finite set of predefined models. A drawback in formalising an AM problem is that experts are often solicited to provide this predefined set of models by specifying the transition matrices. Expert elicitation is a challenging and time-consuming process that is prone to biases, and a key assumption of hmMDPs is that the true transition matrix is included in the candidate model set. We propose an original approach to build an hmMDP with a universal set of predefined models that is capable of solving any 2-state n-action AM problem. Our approach uses properties of the transition matrices to build the model set and is independent of expert input, removing the potential for expert error in the optimal solution. We provide analytical formulations to derive the minimum set of models to include in an hmMDP to solve any AM problem with 2 states and n actions. We assess our universal AM algorithm on two species conservation case studies from Australia and on randomly generated problems.
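
As a simplified illustration of why a universal candidate set is feasible in the 2-state case: any 2-state row-stochastic transition matrix has only two free parameters, so candidate dynamics models can be enumerated directly. The naive grid below is for intuition only; the paper instead derives the minimum set analytically.

```python
import numpy as np
from itertools import product

def universal_model_grid(step=0.25):
    """Enumerate candidate 2-state transition matrices on a regular grid.
    Each 2x2 row-stochastic matrix is fully described by two parameters:
    p = P(stay in state 0) and q = P(stay in state 1)."""
    values = np.arange(0.0, 1.0 + step, step)
    return [np.array([[p, 1 - p], [1 - q, q]]) for p, q in product(values, values)]

models = universal_model_grid(step=0.25)
print(len(models))      # 25 candidate dynamics models for one action
print(models[7])
```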

#24 We Don't Speak the Same Language: Interpreting Polarization through Machine Translation

Authors: Ashiqur R. KhudaBukhsh ; Rupak Sarkar ; Mark S. Kamlet ; Tom Mitchell

Polarization among US political parties, media and elites is a widely studied topic. Prominent lines of prior research across multiple disciplines have observed and analyzed growing polarization in social media. In this paper, we present a new methodology that offers a fresh perspective on interpreting polarization through the lens of machine translation. With a novel proposition that two sub-communities are speaking in two different "languages", we demonstrate that modern machine translation methods can provide a simple yet powerful and interpretable framework to understand the differences between two (or more) large-scale social media discussion data sets at the granularity of words. Via a substantial corpus of 86.6 million comments by 6.5 million users on over 200,000 news videos hosted by YouTube channels of four prominent US news networks, we demonstrate that simple word-level and phrase-level translation pairs can reveal deep insights into the current political divide -- what is "black lives matter" to one can be "all lives matter" to the other.
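
As a rough, simplified sketch of the word-level "translation" idea, assuming gensim is available: train a separate embedding model on each community's comments, align the two spaces with orthogonal Procrustes over the shared vocabulary, and read off translations as nearest neighbors. The toy corpora are placeholders, and the paper's actual pipeline applies machine translation methods rather than this exact recipe.

```python
import numpy as np
from gensim.models import Word2Vec

# Toy stand-ins for the two communities' comment corpora.
corpus_a = [["black", "lives", "matter", "protest"]] * 200
corpus_b = [["all", "lives", "matter", "protest"]] * 200

emb_a = Word2Vec(corpus_a, vector_size=50, min_count=1, seed=0).wv
emb_b = Word2Vec(corpus_b, vector_size=50, min_count=1, seed=0).wv

# Align the two embedding spaces with orthogonal Procrustes over the
# shared vocabulary, then "translate" by nearest neighbor.
shared = sorted(set(emb_a.index_to_key) & set(emb_b.index_to_key))
A = np.stack([emb_a[w] for w in shared])
B = np.stack([emb_b[w] for w in shared])
U, _, Vt = np.linalg.svd(A.T @ B)
rotation = U @ Vt

def translate(word):
    return emb_b.similar_by_vector(emb_a[word] @ rotation, topn=1)[0][0]

print("black ->", translate("black"))
```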

#25 RainBench: Towards Data-Driven Global Precipitation Forecasting from Satellite Imagery

Authors: Christian Schroeder de Witt ; Catherine Tong ; Valentina Zantedeschi ; Daniele De Martini ; Alfredo Kalaitzis ; Matthew Chantry ; Duncan Watson-Parris ; Piotr Bilinski

Extreme precipitation events, such as violent rainfall and hail storms, routinely ravage economies and livelihoods around the developing world, and climate change further aggravates this issue. Data-driven deep learning approaches could widen access to accurate multi-day forecasts that help mitigate the impact of such events. However, there is currently no benchmark dataset dedicated to the study of global precipitation forecasts. In this paper, we introduce RainBench, a new multi-modal benchmark dataset for data-driven precipitation forecasting. It includes simulated satellite data, a selection of relevant meteorological data from the ERA5 reanalysis product, and IMERG precipitation data. We also release PyRain, a library to process large precipitation datasets efficiently. We present an extensive analysis of our novel dataset and establish baseline results for two benchmark medium-range precipitation forecasting tasks. Finally, we discuss existing data-driven weather forecasting methodologies and suggest future research avenues.